set.seed(0)
knitr::opts_chunk$set(echo = FALSE, warning = FALSE, message = FALSE,
                      cache = TRUE, cache.extra = knitr::rand_seed,
                      dpi = 300, fig.width = 13.3, fig.height = 10)
#library(viridis)
library(tidyverse)
library(scales)
library(lubridate)
options(scipen = 999)
library(latex2exp)
MIN_VALID_YEAR <- 6
variable_of_interest <- "Bt/K Lutjanus malabaricus"
x_labeller <- scales::percent
### Simplified model:
Things we don’t know:
Things we know:
Approach:
Benefits:

* No need for population reconstruction
* Clear uncertainty propagation
* Clear value of information

Cost:

* Model needs to be fast
Results:

* Ran the model about 50,000 times
* Accepted about 0.5% of the runs
* This is their story…
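The run-and-keep loop behind those numbers can be sketched as simple rejection sampling. Everything below is a placeholder, not the actual assessment model: the priors on `r` and `K`, the logistic dynamics, the fixed catch, and the Bt/K acceptance window are all illustrative assumptions.

```r
set.seed(1)

n_runs <- 50000
r <- runif(n_runs, 0.05, 0.50)  # assumed prior on growth rate
K <- runif(n_runs, 1e4, 1e6)    # assumed prior on carrying capacity

# Placeholder fast model: project logistic dynamics under a fixed catch
# and return final depletion Bt/K
simulate_bt_over_k <- function(r, K, years = 30, catch = 2e4) {
  B <- K
  for (t in seq_len(years)) {
    B <- max(B + r * B * (1 - B / K) - catch, 1)
  }
  B / K
}

bt_k <- mapply(simulate_bt_over_k, r, K)

# Placeholder acceptance rule: keep runs consistent with "things we know"
keep <- bt_k > 0.2 & bt_k < 0.4
accepted <- data.frame(r = r[keep], K = K[keep], bt_k = bt_k[keep])

mean(keep)  # fraction of runs accepted
```

Because the model is cheap to run, the whole filter completes in seconds, which is why model speed is listed as the main cost above.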
First basic result:
ADD MORE:
We don’t just want to assess the stock; we want to know the effects of policy:
Works as before, but:

* apply the same policy to all accepted runs
Positive:

* study the average effect
* understand the uncertainty

Negative:

* to make runs comparable, results can be stated only in relative terms
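Stating effects in relative terms can be sketched as below. The columns are hypothetical: `b_status_quo` and `b_policy` stand for each accepted run's projected biomass under current management and under the candidate policy.

```r
# Hypothetical projections from four accepted runs: absolute biomass
# differs wildly across runs, so compare each run to its own baseline
accepted_runs <- data.frame(
  run          = 1:4,
  b_status_quo = c(5e4, 2.0e5, 8e4, 1.5e5),  # projected biomass, no change
  b_policy     = c(6e4, 2.6e5, 9e4, 1.8e5)   # projected biomass, new policy
)

accepted_runs$rel_change <- with(
  accepted_runs,
  (b_policy - b_status_quo) / b_status_quo
)

# Average effect and its spread across accepted runs
mean(accepted_runs$rel_change)
quantile(accepted_runs$rel_change, c(0.05, 0.95))
```

The mean of `rel_change` gives the average policy effect, and its quantiles across accepted runs summarize the uncertainty, without ever committing to a single absolute biomass.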